Neural Execution Engines: Learning to Execute Subroutines
A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms. This is evidenced by their inability to generalize to data distributions that are outside of their restricted training sets, namely larger inputs and unseen data. We study these generalization issues at the level of numerical subroutines that comprise common algorithms like sorting, shortest paths, and minimum spanning trees. First, we observe that transformer-based sequence-to-sequence models can learn subroutines like sorting a list of numbers, but their performance rapidly degrades as the length of lists grows beyond those found in the training set. We demonstrate that this is due to attention weights that lose fidelity with longer sequences, particularly when the input numbers are numerically similar. To address the issue, we propose a learned conditional masking mechanism, which enables the model to strongly generalize far outside of its training range with near-perfect accuracy on a variety of algorithms. Second, to generalize to unseen data, we show that encoding numbers with a binary representation leads to embeddings with rich structure once trained on downstream tasks like addition or multiplication. This allows the embedding to handle missing data by faithfully interpolating numbers not seen during training.
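The abstract's second contribution hinges on feeding the model fixed-width binary representations of numbers rather than one-hot token IDs. As a minimal sketch (not the authors' code), such an encoding can be produced as follows; the 8-bit width is an illustrative assumption:

```python
# Sketch of a fixed-width binary number encoding, the input representation
# the abstract describes. The width of 8 bits is an illustrative choice.

def to_binary(n: int, width: int = 8) -> list[int]:
    """Return the `width`-bit binary representation of n, MSB first."""
    return [(n >> i) & 1 for i in range(width - 1, -1, -1)]

print(to_binary(5))  # [0, 0, 0, 0, 0, 1, 0, 1]
```

Because nearby integers share most of their bits, an embedding learned on top of such vectors can interpolate to numbers absent from the training set, which is the behavior the abstract reports.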
Review for NeurIPS paper: Neural Execution Engines: Learning to Execute Subroutines
After reading each other's reviews and discussing the author rebuttal, the reviewers' opinions on this submission sit around the borderline. The hesitation towards acceptance largely stems from confusion around the key motivations of the work, the amount of important detail residing in the supplement, and uncertainty about the relationship to prior work in program synthesis. The work focuses on strong generalization in neural networks trained to perform algorithmic reasoning. The experiments cover a fairly narrow domain -- algorithmic subroutines such as finding the minimum of a list, merging two sorted lists, and taking a sum. The type of generalization examined concerns extending the length of the input list by scaling up the associated problem (sorting, MST, shortest path) and generalizing to never-before-seen numbers or number combinations.
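For concreteness, reference implementations of the subroutines the meta-review lists might look like the following (a sketch, not code from the paper; the networks are trained to imitate the input-output behavior of routines like these):

```python
# Illustrative reference subroutines: find-minimum, merge two sorted
# lists, and sum. These are the kind of primitives the paper studies.

def find_min(xs):
    """Return the minimum element of a non-empty list."""
    m = xs[0]
    for x in xs[1:]:
        if x < m:
            m = x
    return m

def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def total(xs):
    """Return the sum of a list of numbers."""
    s = 0
    for x in xs:
        s += x
    return s
```

Generalization is then measured by training on short lists and testing these behaviors on substantially longer ones.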
Review for NeurIPS paper: Neural Execution Engines: Learning to Execute Subroutines
Weaknesses: In general, I think the technical novelty of this work is limited. In particular, they claim that an additional mask prediction component is necessary to achieve generalization. My understanding is that the training supervision of NEE includes the desired mask at each execution step, which corresponds to the data pointers. However, it is unclear whether the training supervision of the baseline Transformer also includes the ground truth masks, or it only includes the output value at each step. Basically, I want to know whether the improvement comes from the more fine-grained supervision or the architectural design.
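To make the reviewer's question concrete, step-wise mask supervision for a selection-sort trace could be generated as in the following hypothetical sketch (names and trace format are illustrative, not taken from the paper): each step's target is both the minimum of the still-unmasked elements (the output value) and the updated mask hiding that element (the "data pointer"). The baseline question is whether the plain Transformer sees only the values or also these masks.

```python
# Hypothetical sketch of step-wise training supervision for selection
# sort: at each step, the target is (next output value, next-step mask).
# A mask entry of 1 means the element is still available.

def selection_sort_trace(xs):
    mask = [1] * len(xs)
    trace = []
    for _ in range(len(xs)):
        # index of the minimum over still-unmasked positions
        idx = min((i for i, m in enumerate(mask) if m), key=lambda i: xs[i])
        mask[idx] = 0  # next-step mask hides the chosen element
        trace.append((xs[idx], mask.copy()))
    return trace

print(selection_sort_trace([3, 1, 2]))
# [(1, [1, 0, 1]), (2, [1, 0, 0]), (3, [0, 0, 0])]
```

Under this framing, the ablation the reviewer asks for would train the baseline with and without the mask targets while holding the value targets fixed.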